15 research outputs found

    Perceptually Meaningful Image Editing: Depth

    We introduce the concept of perceptually meaningful image editing and present two techniques for manipulating the apparent depth of objects in an image. The user loads an image, selects an object, and specifies whether the object should appear closer or farther away. The system automatically determines target values for the object and/or background that achieve the desired depth change. These depth editing operations, based on techniques used by traditional artists, manipulate either the luminance or the color temperature of different regions of the image. By performing blending in the gradient domain and reconstruction with a Poisson solver, the appearance of false edges is minimized. The results of a preliminary user study, designed to evaluate the effectiveness of these techniques, are also presented.
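
    As a rough illustration of the gradient-domain step described above, the following sketch (ours, not the authors' code) scales the luminance of a masked region and reconstructs the result with a Poisson solve so that no false edge appears along the region boundary. The grayscale input, the gain parameter, and the solver choice are all our own assumptions.

        # Minimal sketch: gradient-domain luminance editing with a Poisson
        # reconstruction. Assumes a float grayscale image in [0, 1] and a
        # boolean mask that does not touch the image border.
        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import spsolve

        def poisson_depth_edit(lum, mask, gain=0.7):
            """Scale luminance inside `mask` by `gain`; blend seamlessly."""
            h, w = lum.shape
            target = lum * np.where(mask, gain, 1.0)        # naive edited values
            gy, gx = np.gradient(target)                    # target gradient field
            div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)

            idx = -np.ones((h, w), int)                     # pixel -> unknown index
            ys, xs = np.nonzero(mask)
            idx[ys, xs] = np.arange(len(ys))
            A = lil_matrix((len(ys), len(ys)))
            b = np.zeros(len(ys))
            for k, (y, x) in enumerate(zip(ys, xs)):
                A[k, k] = -4.0                              # discrete Laplacian center
                b[k] = div[y, x]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if mask[ny, nx]:
                        A[k, idx[ny, nx]] = 1.0             # unknown neighbor
                    else:
                        b[k] -= lum[ny, nx]                 # Dirichlet boundary value
            out = lum.copy()
            out[ys, xs] = spsolve(A.tocsr(), b)             # solve the Poisson system
            return np.clip(out, 0.0, 1.0)

    Solving only inside the mask keeps the edit local, while the Dirichlet boundary pins the surrounding pixels to the original image, which is what suppresses visible seams.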

    The Real Effect of Warm-Cool Colors

    The phenomenon of warmer colors appearing nearer in depth to viewers than cooler colors has been studied extensively by psychologists and other vision researchers. The vast majority of these studies have asked human observers to view physically equidistant, colored stimuli and compare them for relative depth. However, in most cases, the stimuli presented were rather simple: straight colored lines, uniform color patches, point light sources, or symmetrical objects with uniform shading. Additionally, the colors used were typically highly saturated. Although such stimuli are useful in isolating and studying depth cues in certain contexts, they leave open the question of whether the human visual system operates similarly for realistic objects. This paper presents the results of an experiment designed to explore the color-depth relationship for realistic, colored objects with varying shading and contour.

    Evaluating Audience Engagement of an Immersive Performance on a Virtual Stage

    Presenting theatrical performances in virtual reality (VR) has been an active area of research since the early 2000s. VR provides a unique form of storytelling, made possible through the use of physically and digitally distributed 3D worlds. We describe a methodology for determining audience engagement in a virtual theatre performance. We use a combination of galvanic skin response (GSR) data, the self-reported positive and negative affect schedule (PANAS), post-viewing reflection, and a think-aloud method to assess user reaction to the virtual reality experience. In this study, we combine the implicit physiological data from GSR with explicit user feedback to produce a holistic metric for assessing immersion. Although the study evaluated a particular artistic work, its methodology provides a foundation for conducting similar research. The combination of PANAS, self-reflection, and think-aloud protocols in conjunction with GSR data constitutes a novel approach to the study of live performance in virtual reality. The approach is also extendable to include other implicit measures such as pulse rate, blood pressure, or eye tracking. Our case study compares the experience of viewing the performance on a computer monitor to viewing it with a head-mounted display. Results showed statistically significant differences based on viewing platform in the PANAS self-report metric as well as in GSR measurements. Feedback obtained via think-aloud and reflection analysis also highlighted qualitative differences between the two viewing scenarios.
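
    To make the combined implicit/explicit metric concrete, here is a purely hypothetical sketch of one way such signals could be fused; the per-participant z-scoring, the 50/50 weighting, and the PANAS rescaling are our illustrative assumptions, not the study's actual analysis.

        # Hypothetical fusion of implicit GSR data with explicit PANAS scores,
        # followed by a monitor-vs-HMD comparison. All weights are assumptions.
        import numpy as np
        from scipy import stats

        def engagement_score(gsr, panas_pre, panas_post, w=0.5):
            gsr_z = (gsr - gsr.mean()) / gsr.std()     # per-participant z-score
            arousal = gsr_z.max()                      # peak normalized response
            shift = (panas_post - panas_pre) / 40.0    # PANAS PA spans 10-50
            return w * arousal + (1 - w) * shift

        rng = np.random.default_rng(0)                 # synthetic stand-in data
        monitor = [engagement_score(rng.normal(5, 1.0, 600), 28, 30) for _ in range(12)]
        hmd = [engagement_score(rng.normal(5, 1.5, 600), 27, 35) for _ in range(12)]
        t, p = stats.ttest_ind(hmd, monitor)           # test the platform effect
        print(f"t = {t:.2f}, p = {p:.3f}")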

    Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples

    This paper presents several methods for shading meshes from scanned paint samples that represent dark-to-light transitions. Our techniques emphasize artistic control of brush stroke texture and color. We first demonstrate how the texture of the paint sample can be separated from its color gradient. We then demonstrate three methods, two real-time and one off-line, for producing rendered, shaded images from the texture samples. All three techniques use texture synthesis to generate additional paint samples. Finally, we develop metrics for evaluating how well each method achieves our goal in terms of texture similarity, shading correctness, and temporal coherence.
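
    The separation step lends itself to a simple illustration. The sketch below (our construction, not the paper's algorithm) estimates the color gradient of a dark-to-light sample by averaging columns, leaving a zero-mean residual as the stroke texture; a shading image then indexes the ramp and the texture is re-added.

        # Minimal sketch: split a scanned dark-to-light paint sample into a
        # smooth color ramp plus residual stroke texture, then reshade.
        import numpy as np

        def separate_sample(sample):
            """sample: float RGB array (H, W, 3), dark at left, light at right."""
            ramp = sample.mean(axis=0)                 # (W, 3) color gradient
            texture = sample - ramp                    # zero-mean stroke texture
            return ramp, texture

        def reshade(ramp, texture, shading):
            """shading: (H, W) in [0, 1]; texture must cover the shading size."""
            w = ramp.shape[0]
            cols = np.clip((shading * (w - 1)).astype(int), 0, w - 1)
            h_s, w_s = shading.shape
            return np.clip(ramp[cols] + texture[:h_s, :w_s], 0.0, 1.0)

    In the paper's setting, texture synthesis would extend the residual so samples of arbitrary size can be drawn; the slice here simply assumes the scan is large enough.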

    EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking

    Ellipse fitting, an essential component in pupil- or iris-tracking-based video oculography, is performed on previously segmented eye parts generated using various computer vision techniques. Several factors, such as occlusions due to eyelid shape, camera position, or eyelashes, frequently break ellipse fitting algorithms that rely on well-defined pupil or iris edge segments. In this work, we propose training a convolutional neural network to directly segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least a 10% and 24% increase in pupil and iris center detection rate, respectively, within a two-pixel error margin) compared to using standard eye parts segmentation, across multiple publicly available synthetic segmentation datasets.
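
    The payoff of segmenting the full ellipse is easiest to see in the downstream fit. Here is a minimal sketch (ours, not the EllSeg pipeline) of fitting an ellipse to a predicted pupil mask with OpenCV; because the mask covers the whole elliptical region rather than fragmentary edges, partial eyelid occlusion degrades the fit far less.

        # Minimal sketch: recover ellipse parameters from a binary pupil mask.
        import cv2
        import numpy as np

        def fit_pupil_ellipse(mask):
            """Fit an ellipse to the largest connected region of a uint8 mask."""
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            if not contours:
                return None
            largest = max(contours, key=cv2.contourArea)
            if len(largest) < 5:              # cv2.fitEllipse needs >= 5 points
                return None
            return cv2.fitEllipse(largest)    # ((cx, cy), (axes), angle)

        # Toy usage: a synthetic elliptical mask recovers its own parameters.
        mask = np.zeros((240, 320), np.uint8)
        cv2.ellipse(mask, ((160, 120), (80, 50), 30), 255, -1)
        print(fit_pupil_ellipse(mask))        # ~((160, 120), (50, 80), 30)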

    GazeLens: Guiding Attention to Improve Gaze Interpretation in Hub-Satellite Collaboration

    In hub-satellite collaboration using video, interpreting gaze direction is critical for communication between hub coworkers sitting around a table and their remote satellite colleague. However, 2D video distorts images and makes this interpretation inaccurate. We present GazeLens, a video conferencing system that improves hub coworkers’ ability to interpret the satellite worker’s gaze. A 360° camera captures the hub coworkers and a ceiling camera captures artifacts on the hub table. The system combines these two video feeds in a single interface. Lens widgets strategically guide the satellite worker’s attention toward specific areas of her/his screen, allowing hub coworkers to clearly interpret her/his gaze direction. Our evaluation shows that GazeLens (1) increases hub coworkers’ overall gaze interpretation accuracy by 25.8% compared to a conventional video conferencing system, (2) is especially effective for physical artifacts on the hub table, and (3) improves hub coworkers’ ability to distinguish between gazes toward people and gazes toward artifacts. We discuss how screen space can be leveraged to improve gaze interpretation.
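
    As a purely hypothetical illustration of the geometry involved, the sketch below maps an artifact's position in the ceiling-camera view onto the satellite screen with a perspective transform, which is one way a lens widget could be placed so that an on-screen glance reads as a gaze toward the physical artifact. The calibration points, resolutions, and the mapping itself are our assumptions, not the system's documented implementation.

        # Hypothetical sketch: project ceiling-camera table coordinates onto
        # the satellite worker's screen for lens-widget placement.
        import numpy as np
        import cv2

        # Four table corners in the ceiling-camera image (pixels) ...
        table_px = np.float32([[102, 88], [530, 80], [560, 410], [85, 420]])
        # ... and where those corners should land on the satellite screen.
        screen_px = np.float32([[200, 300], [1720, 300], [1720, 900], [200, 900]])
        H = cv2.getPerspectiveTransform(table_px, screen_px)

        def lens_position(artifact_px):
            """Project a ceiling-camera artifact location to screen coordinates."""
            p = np.float32([[artifact_px]])            # shape (1, 1, 2)
            return cv2.perspectiveTransform(p, H)[0, 0]

        print(lens_position((320, 250)))               # screen position for the lens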

    A Thesis on Techniques for Non-Photorealistic Shading Using Real Paint

    The goal of this research is to explore techniques for shading 3D computer-generated models using scanned images of actual paint samples. The techniques presented emphasize artistic control of brush stroke texture and color. We first demonstrate how the texture of a paint sample can be separated from its color transition. Four methods, three real-time and one off-line, for producing rendered images from the paint samples are then presented. Finally, we develop metrics for evaluating how well each method achieves our goal in terms of texture similarity, shading correctness, and temporal coherence.
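
    Of the real-time variants, the simplest to sketch is a ramp lookup driven by diffuse shading; this version (ours, with an illustrative ramp and lighting setup) indexes a scanned dark-to-light sample by the Lambertian term n·l, so darker paint shades the less-lit surface regions.

        # Minimal sketch: shade by indexing a paint-sample color ramp with n.l.
        import numpy as np

        def paint_shade(normals, light_dir, ramp):
            """normals: (H, W, 3) unit normals; ramp: (N, 3) dark-to-light colors."""
            l = light_dir / np.linalg.norm(light_dir)
            ndotl = np.clip(normals @ l, 0.0, 1.0)          # Lambertian term
            cols = (ndotl * (len(ramp) - 1)).astype(int)    # ramp index per pixel
            return ramp[cols]

        ramp = np.array([[0.10, 0.05, 0.00],                # dark paint ...
                         [0.40, 0.20, 0.10],
                         [0.70, 0.50, 0.30],
                         [1.00, 0.90, 0.70]])               # ... light paint
        normals = np.dstack([np.zeros((4, 4)), np.zeros((4, 4)), np.ones((4, 4))])
        print(paint_shade(normals, np.array([0.0, 0.0, 1.0]), ramp).shape)  # (4, 4, 3)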

    Using Value Images to Adjust Intensity in 3D Renderings and Photographs

    …and the rendered image and maintains contrast in the rendered image. To represent the value-map function we use a piecewise-linear function (see Figure 1). We could use a higher-order function, such…
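
    The piecewise-linear value map mentioned in the fragment above is straightforward to realize; this sketch (with made-up control points) uses np.interp to remap intensities while keeping the mapping monotone.

        # Minimal sketch: a piecewise-linear value map applied to intensities.
        import numpy as np

        def apply_value_map(intensity, knots_in, knots_out):
            """Remap intensities in [0, 1] through a piecewise-linear curve."""
            return np.interp(intensity, knots_in, knots_out)

        knots_in = [0.0, 0.5, 1.0]                  # illustrative control points
        knots_out = [0.0, 0.65, 1.0]                # brighten midtones, pin ends
        img = np.linspace(0.0, 1.0, 5)
        print(apply_value_map(img, knots_in, knots_out))  # [0. 0.325 0.65 0.825 1.]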